## Quick Start

1. Install tooling: `pip install uv`, then `uv pip install --system nox nox-uv`.
2. Create the dev environment: `nox -s dev` (the virtualenv lives under `.nox/dev`).
3. Download data: `python download.py` downloads the raw CSV; cleaned data lives in `data/processed/heart_disease_clean_binary.csv`.
4. Train: `python -c "from src.training import train_pipeline; train_pipeline('data/raw/heart_disease_raw.csv')"` saves `models/best_model.pkl` and `models/preprocessor.pkl`.
5. Run tests: `pytest tests/ -v` for preprocessing sanity checks.
6. Serve the API: `uvicorn app:app --reload` (or the Docker/K8s options below). API docs at http://localhost:8000/docs.

For a cleaner, assignment-aligned writeup, see the documentation pages under `doc/`. The built MkDocs site is available at `site/index.html`.
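The `python -c` training one-liner above can also be wrapped in a small script. The sketch below is illustrative (`train.py` is a hypothetical filename) and assumes the `train_pipeline(csv_path)` signature shown in the quick start:

```python
# train.py -- hypothetical wrapper around the quick-start one-liner, so the
# training entry point can take the CSV path as a command-line argument.
import argparse


def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Train the heart-disease models.")
    parser.add_argument(
        "csv",
        nargs="?",
        default="data/raw/heart_disease_raw.csv",
        help="Path to the raw dataset CSV.",
    )
    return parser.parse_args(argv)


if __name__ == "__main__":
    # Assumed signature from the quick start; writes models/best_model.pkl
    # and models/preprocessor.pkl as described above.
    from src.training import train_pipeline

    train_pipeline(parse_args().csv)
```

Run it as `python train.py` or `python train.py path/to/other.csv`.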
## Components

- **Data download**: `download.py`.
- **Preprocessing**: `HeartDiseasePreprocessor` in `src/preprocessing.py`. Numeric features scaled with `StandardScaler`; categorical features label-encoded; column order enforced to avoid training/serving drift.
- **Feature importance**: plotted to `logs/feature_importance.png`.
- **Model artifacts**: best model saved to `models/best_model.pkl`; preprocessor saved to `models/preprocessor.pkl`.
- **Experiment tracking**: MLflow experiment `heart-disease-mlops`. Launch the UI with `mlflow ui --host 0.0.0.0 --port 5000` (then open http://localhost:5000) or via `nox -s mlflow_ui`.
- **Docker**: the `Dockerfile` builds the FastAPI service with model artifacts mounted at `/app/models`; run via `docker run -p 8000:8000 -v $(pwd)/models:/app/models:ro heart-disease-mlops:latest`.
- **Kubernetes**: `k8s/deployment.yaml` (deployment + service + HPA). Probes hit `/health`; the service is exposed as a `LoadBalancer`. Prometheus annotations are included for `/metrics`.
- **Smoke test**: `curl http://localhost:8000/health`, then `POST /predict` with the sample JSON from `README.md`.
- **Tests**: in `tests/`; extend coverage to training and the API schema as needed.
- **Screenshots**: `docs/images/screenshots/`.
- **Logging**: `logs/api.log`; set verbosity via `LOG_LEVEL`.
- **Monitoring**: `/metrics` (Prometheus). Sample config: `monitoring/prometheus.yml`. Run Prometheus with `docker run -p 9090:9090 -v $(pwd)/monitoring/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus`; optional Grafana: `docker run -d -p 3000:3000 grafana/grafana` (add a Prometheus data source at http://host.docker.internal:9090). Pods carry `prometheus.io/*` annotations for automatic scraping.

Monitoring screenshots (in `screenshots/`):
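The column-order enforcement mentioned above can be sketched as follows. This is a minimal illustration, not the actual `HeartDiseasePreprocessor` code; the feature names and the `enforce_column_order` helper are assumptions for the example:

```python
# Sketch of serving-time column-order enforcement (illustrative only; see
# HeartDiseasePreprocessor in src/preprocessing.py for the real logic).
import pandas as pd

# Hypothetical column order captured at fit time.
FEATURE_ORDER = ["age", "sex", "cp", "trestbps", "chol"]


def enforce_column_order(df: pd.DataFrame, order: list) -> pd.DataFrame:
    """Reindex incoming rows to the training-time column order.

    Missing columns raise early instead of silently shifting features,
    which is exactly the training/serving drift the preprocessor guards
    against.
    """
    missing = [c for c in order if c not in df.columns]
    if missing:
        raise ValueError(f"missing feature columns: {missing}")
    return df.loc[:, order]
```

Calling `enforce_column_order` on a request payload whose columns arrive in a different order returns a frame with the training-time ordering, so the scaler and model always see features in the same positions.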
- `monitoring-targets.png`: Prometheus targets page (targets UP).
- `monitoring-requests-total.png`: `sum by(endpoint) (heart_api_requests_total)`.
- `monitoring-requests-rate-v1.png` and `monitoring-requests-rate-v2.png`: `sum by(endpoint) (rate(heart_api_requests_total[1m]))`.
- `monitoring-metrics-terminal-view.png`: `/metrics` output view.
- `curl-call-api-health.png`: API call/logging view.

## CI

`.github/workflows/ci.yml` (lint + pytest).

## Architecture

```mermaid
flowchart TD
    A[UCI Dataset CSV] --> B[download.py]
    B --> C[data/raw/heart_disease_raw.csv]
    C --> D[Preprocessing<br/>HeartDiseasePreprocessor]
    D --> E[Training<br/>Logistic Regression + Random Forest]
    E --> F[MLflow Tracking<br/>params/metrics/artifacts/models]
    E --> G[Saved Artifacts<br/>models/best_model.pkl<br/>models/preprocessor.pkl]
    G --> H[FastAPI Service<br/>app.py]
    H --> I["/predict"]
    H --> J["/health"]
    H --> K["/metrics"]
    K --> L[Prometheus/Grafana]
```
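The `sum by(endpoint)` queries in the screenshots roll the `heart_api_requests_total` counter up per endpoint label. A stdlib-only sketch of the same rollup over the Prometheus text exposition format (the sample payload below is illustrative, not real `/metrics` output):

```python
# Minimal parser for the Prometheus text exposition format, summing the
# heart_api_requests_total counter per endpoint label -- the same rollup
# that `sum by(endpoint) (heart_api_requests_total)` performs in PromQL.
import re
from collections import defaultdict

# Illustrative sample of /metrics output (not captured from the real service).
SAMPLE = """\
# HELP heart_api_requests_total Total API requests.
# TYPE heart_api_requests_total counter
heart_api_requests_total{endpoint="/predict",method="POST"} 7
heart_api_requests_total{endpoint="/health",method="GET"} 3
heart_api_requests_total{endpoint="/predict",method="GET"} 1
"""


def sum_by_endpoint(text: str) -> dict:
    """Sum counter samples grouped by their endpoint label."""
    totals = defaultdict(float)
    for line in text.splitlines():
        if line.startswith("heart_api_requests_total{"):
            labels, value = line.rsplit(" ", 1)
            match = re.search(r'endpoint="([^"]+)"', labels)
            if match:
                totals[match.group(1)] += float(value)
    return dict(totals)


print(sum_by_endpoint(SAMPLE))  # {'/predict': 8.0, '/health': 3.0}
```

The `rate(...[1m])` variants in the other screenshots additionally divide the counter's increase by the window length; Prometheus does that server-side from successive scrapes.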
## API Endpoints

- `GET /health`: readiness check; reports model load status.
- `POST /predict`: returns `prediction`, `confidence`, `risk_level`.
- `GET /metrics`: Prometheus scrape endpoint for request counts and latency.

> **Note:** For detailed instructions, code explanations, and screenshots, refer to the full documentation site built with MkDocs in the `site/` directory. The site is served via GitHub Pages; access it through the link above.
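A hedged client-side sketch of calling `POST /predict` with the standard library. The payload field names below are assumptions for illustration; use the sample JSON from `README.md` for the real schema:

```python
# Hypothetical client call against a locally running service. The PAYLOAD
# field names are illustrative -- the real request schema is the sample
# JSON in README.md.
import json
from urllib import request

PAYLOAD = {  # illustrative features, not the authoritative schema
    "age": 54, "sex": 1, "cp": 0, "trestbps": 130, "chol": 246,
}


def predict(base_url: str = "http://localhost:8000") -> dict:
    """POST the payload to /predict and return the decoded JSON response."""
    req = request.Request(
        f"{base_url}/predict",
        data=json.dumps(PAYLOAD).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Per the endpoint list above, the response carries prediction,
    # confidence, and risk_level.
    with request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(predict())
```

Requires the API to be running (e.g. `uvicorn app:app --reload`); otherwise the call raises a `URLError`.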